Comprehensive Portfolio Analysis
Ferhat's Investment Portfolio - February 2026
This notebook provides a complete analysis of your portfolio including:
- Portfolio Overview - Composition & allocation
- PyPortfolioOpt Analysis - Efficient frontier, optimal portfolios
- Riskfolio-Lib Analysis - Advanced risk metrics, HRP
- QuantStats Analysis - Performance tearsheet, metrics
- Benchmark Comparison - S&P 500, Nasdaq 100, MSCI World
- Lazy Portfolio Comparison - Ray Dalio, Buffett, 60/40, Yale, Shiller, ARK
- Summary & Recommendations
In [1]:
import warnings
warnings.filterwarnings('ignore')
import numpy as np
import pandas as pd
import yfinance as yf
import plotly.graph_objects as go
import plotly.express as px
from plotly.subplots import make_subplots
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
import quantstats as qs
from pypfopt import EfficientFrontier, risk_models, expected_returns, plotting
from pypfopt.discrete_allocation import DiscreteAllocation, get_latest_prices
import riskfolio as rp
from datetime import datetime, timedelta
import scipy.stats as stats
pd.set_option('display.max_columns', None)
pd.set_option('display.float_format', '{:.4f}'.format)
print("✅ All libraries loaded successfully")
✅ All libraries loaded successfully
In [2]:
# Portfolio holdings from IB screenshots
# Tickers mapped to yfinance format
portfolio_raw = {
'GOOGL': {'weight': 8.23, 'yf_ticker': 'GOOGL', 'name': 'Alphabet Inc', 'unrealized_pl': 8.40},
'SGLN': {'weight': 7.80, 'yf_ticker': 'SGLN.L', 'name': 'WisdomTree Physical Swiss Gold', 'unrealized_pl': 46.5},
'AMZN': {'weight': 6.28, 'yf_ticker': 'AMZN', 'name': 'Amazon.com', 'unrealized_pl': -6.71},
'KGC': {'weight': 5.51, 'yf_ticker': 'KGC', 'name': 'Kinross Gold', 'unrealized_pl': 32.0},
'PLS': {'weight': 5.39, 'yf_ticker': 'PLS.AX', 'name': 'Pilbara Minerals', 'unrealized_pl': 66.3},
'BARC': {'weight': 4.87, 'yf_ticker': 'BARC.L', 'name': 'Barclays', 'unrealized_pl': 24.6},
'OKLO': {'weight': 4.52, 'yf_ticker': 'OKLO', 'name': 'Oklo Inc (Nuclear)', 'unrealized_pl': 5.39},
'RR': {'weight': 4.38, 'yf_ticker': 'RR.L', 'name': 'Rolls-Royce Holdings', 'unrealized_pl': 149.0},
'LEU': {'weight': 3.56, 'yf_ticker': 'LEU', 'name': 'Centrus Energy', 'unrealized_pl': 28.5},
'NVDA': {'weight': 3.32, 'yf_ticker': 'NVDA', 'name': 'NVIDIA', 'unrealized_pl': 92.7},
'BABA': {'weight': 2.43, 'yf_ticker': 'BABA', 'name': 'Alibaba Group', 'unrealized_pl': 6.32},
'WYFI': {'weight': 2.36, 'yf_ticker': 'WYFI', 'name': 'Xtreme One Entertainment', 'unrealized_pl': 6.99},
'XOM': {'weight': 2.23, 'yf_ticker': 'XOM', 'name': 'Exxon Mobil', 'unrealized_pl': 1.51},
'BIDU': {'weight': 2.18, 'yf_ticker': 'BIDU', 'name': 'Baidu', 'unrealized_pl': 25.5},
'BE': {'weight': 2.15, 'yf_ticker': 'BE', 'name': 'Bloom Energy', 'unrealized_pl': 3.48},
'HAL': {'weight': 2.09, 'yf_ticker': 'HAL', 'name': 'Halliburton', 'unrealized_pl': 13.9},
'NBIS': {'weight': 1.94, 'yf_ticker': 'NBIS', 'name': 'Nebius Group', 'unrealized_pl': 13.9},
'PAAS': {'weight': 1.65, 'yf_ticker': 'PAAS', 'name': 'Pan American Silver', 'unrealized_pl': -3.65},
'DRO': {'weight': 1.52, 'yf_ticker': 'DRO.AX', 'name': 'DroneShield', 'unrealized_pl': 32.7},
'ARG': {'weight': 1.52, 'yf_ticker': 'ARG.TO', 'name': 'Amerigo Resources', 'unrealized_pl': 13.0},
'LAR': {'weight': 1.52, 'yf_ticker': 'LAR', 'name': 'Latin Resources', 'unrealized_pl': 270.4},
'MELI': {'weight': 1.47, 'yf_ticker': 'MELI', 'name': 'MercadoLibre', 'unrealized_pl': -11.5},
'LAC': {'weight': 1.40, 'yf_ticker': 'LAC', 'name': 'Lithium Americas', 'unrealized_pl': 53.6},
'PMET': {'weight': 1.38, 'yf_ticker': 'PMET.TO', 'name': 'Patriot Battery Metals', 'unrealized_pl': 185.9},
'XIAOMI': {'weight': 1.34, 'yf_ticker': '1810.HK', 'name': 'Xiaomi Corp', 'unrealized_pl': -11.0},
'NIO': {'weight': 1.33, 'yf_ticker': 'NIO', 'name': 'NIO Inc (via SWB2)', 'unrealized_pl': -12.0},
'LTR': {'weight': 1.23, 'yf_ticker': 'LTR.AX', 'name': 'Liontown Resources', 'unrealized_pl': 113.8},
'ACG': {'weight': 0.78, 'yf_ticker': 'ACG.L', 'name': 'Abrdn China Growth Fund', 'unrealized_pl': -12.4},
'LKY': {'weight': 0.16, 'yf_ticker': 'LKY.AX', 'name': 'Lakeview Resources', 'unrealized_pl': -74.7},
}
cash_pct = 15.4 # User-specified cash position
# Create DataFrame
portfolio_df = pd.DataFrame([
{'Ticker': k, 'YF_Ticker': v['yf_ticker'], 'Name': v['name'],
'Weight_%': v['weight'], 'Unrealized_PL_%': v['unrealized_pl']}
for k, v in portfolio_raw.items()
])
# Sort by weight
portfolio_df = portfolio_df.sort_values('Weight_%', ascending=False).reset_index(drop=True)
total_equity_weight = portfolio_df['Weight_%'].sum()
print(f"Total equity weight: {total_equity_weight:.2f}%")
print(f"Cash position: {cash_pct:.2f}%")
print(f"Total: {total_equity_weight + cash_pct:.2f}%")
print(f"\nNumber of holdings: {len(portfolio_df)}")
print(f"\n{'='*80}")
portfolio_df[['Ticker', 'Name', 'Weight_%', 'Unrealized_PL_%']]
Total equity weight: 84.54%
Cash position: 15.40%
Total: 99.94%

Number of holdings: 29

================================================================================
Out[2]:
| | Ticker | Name | Weight_% | Unrealized_PL_% |
|---|---|---|---|---|
| 0 | GOOGL | Alphabet Inc | 8.2300 | 8.4000 |
| 1 | SGLN | WisdomTree Physical Swiss Gold | 7.8000 | 46.5000 |
| 2 | AMZN | Amazon.com | 6.2800 | -6.7100 |
| 3 | KGC | Kinross Gold | 5.5100 | 32.0000 |
| 4 | PLS | Pilbara Minerals | 5.3900 | 66.3000 |
| 5 | BARC | Barclays | 4.8700 | 24.6000 |
| 6 | OKLO | Oklo Inc (Nuclear) | 4.5200 | 5.3900 |
| 7 | RR | Rolls-Royce Holdings | 4.3800 | 149.0000 |
| 8 | LEU | Centrus Energy | 3.5600 | 28.5000 |
| 9 | NVDA | NVIDIA | 3.3200 | 92.7000 |
| 10 | BABA | Alibaba Group | 2.4300 | 6.3200 |
| 11 | WYFI | Xtreme One Entertainment | 2.3600 | 6.9900 |
| 12 | XOM | Exxon Mobil | 2.2300 | 1.5100 |
| 13 | BIDU | Baidu | 2.1800 | 25.5000 |
| 14 | BE | Bloom Energy | 2.1500 | 3.4800 |
| 15 | HAL | Halliburton | 2.0900 | 13.9000 |
| 16 | NBIS | Nebius Group | 1.9400 | 13.9000 |
| 17 | PAAS | Pan American Silver | 1.6500 | -3.6500 |
| 18 | DRO | DroneShield | 1.5200 | 32.7000 |
| 19 | ARG | Amerigo Resources | 1.5200 | 13.0000 |
| 20 | LAR | Latin Resources | 1.5200 | 270.4000 |
| 21 | MELI | MercadoLibre | 1.4700 | -11.5000 |
| 22 | LAC | Lithium Americas | 1.4000 | 53.6000 |
| 23 | PMET | Patriot Battery Metals | 1.3800 | 185.9000 |
| 24 | XIAOMI | Xiaomi Corp | 1.3400 | -11.0000 |
| 25 | NIO | NIO Inc (via SWB2) | 1.3300 | -12.0000 |
| 26 | LTR | Liontown Resources | 1.2300 | 113.8000 |
| 27 | ACG | Abrdn China Growth Fund | 0.7800 | -12.4000 |
| 28 | LKY | Lakeview Resources | 0.1600 | -74.7000 |
In [3]:
# Portfolio composition pie chart
labels = list(portfolio_df['Ticker']) + ['CASH']
values = list(portfolio_df['Weight_%']) + [cash_pct]
fig = go.Figure(data=[go.Pie(
labels=labels, values=values,
hole=0.4,
textinfo='label+percent',
textposition='outside',
marker=dict(line=dict(color='#000000', width=1))
)])
fig.update_layout(
title='Portfolio Composition (% of Net Liquidation Value)',
width=900, height=700,
showlegend=False
)
fig.show()
In [4]:
# Thematic grouping
themes = {
'AI / Tech': ['GOOGL', 'AMZN', 'NVDA', 'BIDU', 'NBIS', 'WYFI'],
'Gold / Silver': ['SGLN', 'KGC', 'PAAS'],
'Nuclear Energy': ['OKLO', 'LEU', 'BE'],
'Lithium / Critical Minerals': ['PLS', 'LAR', 'LAC', 'PMET', 'LTR', 'ARG', 'LKY'],
'China Growth': ['BABA', 'XIAOMI', 'NIO', 'ACG'],
'UK Equities': ['BARC', 'RR'],
'Energy / Oil': ['XOM', 'HAL'],
'LatAm / EM': ['MELI'],
'Defense Tech': ['DRO'],
'Cash': ['CASH']
}
theme_weights = {}
for theme, tickers in themes.items():
    if theme == 'Cash':
        theme_weights[theme] = cash_pct
    else:
        w = portfolio_df[portfolio_df['Ticker'].isin(tickers)]['Weight_%'].sum()
        theme_weights[theme] = w
theme_df = pd.DataFrame(list(theme_weights.items()), columns=['Theme', 'Weight_%'])
theme_df = theme_df.sort_values('Weight_%', ascending=True)
fig = go.Figure(data=[go.Bar(
y=theme_df['Theme'], x=theme_df['Weight_%'],
orientation='h',
marker_color=['#FFD700' if 'Gold' in t else '#00FF00' if 'Nuclear' in t
else '#FF6347' if 'AI' in t else '#4169E1' if 'Lithium' in t
else '#FF4500' if 'China' in t else '#808080' if 'Cash' in t
else '#9370DB' for t in theme_df['Theme']],
text=[f'{w:.1f}%' for w in theme_df['Weight_%']],
textposition='outside'
)])
fig.update_layout(
title='Portfolio Allocation by Investment Theme',
xaxis_title='Weight (%)',
width=900, height=500,
margin=dict(l=200)
)
fig.show()
In [5]:
# Unrealized P&L by holding
pl_df = portfolio_df.sort_values('Unrealized_PL_%', ascending=True)
colors = ['#FF4136' if x < 0 else '#2ECC40' for x in pl_df['Unrealized_PL_%']]
fig = go.Figure(data=[go.Bar(
y=pl_df['Ticker'], x=pl_df['Unrealized_PL_%'],
orientation='h',
marker_color=colors,
text=[f'{x:.1f}%' for x in pl_df['Unrealized_PL_%']],
textposition='outside'
)])
fig.update_layout(
title='Unrealized P&L by Holding (%)',
xaxis_title='Unrealized P&L (%)',
width=900, height=800,
margin=dict(l=100)
)
fig.show()
2. Historical Data Download
Downloading 3 years of historical data for portfolio holdings, benchmarks, and lazy portfolio ETFs.
In [6]:
# Download historical price data
yf_tickers = portfolio_df['YF_Ticker'].tolist()
end_date = pd.Timestamp.now()
start_date = end_date - pd.DateOffset(years=3)
print("Downloading portfolio data...")
price_data = yf.download(yf_tickers, start=start_date, end=end_date, progress=False)
# Handle multi-level columns
if isinstance(price_data.columns, pd.MultiIndex):
    price_data = price_data['Adj Close'] if 'Adj Close' in price_data.columns.get_level_values(0) else price_data['Close']
price_data = price_data.ffill().dropna(how='all')
# Check data availability
print(f"\nData period: {price_data.index[0].strftime('%Y-%m-%d')} to {price_data.index[-1].strftime('%Y-%m-%d')}")
print(f"Trading days: {len(price_data)}")
# Check for tickers with missing data
missing_pct = price_data.isnull().sum() / len(price_data) * 100
problematic = missing_pct[missing_pct > 20]
if len(problematic) > 0:
    print(f"\n⚠️ Tickers with >20% missing data:")
    for t, pct in problematic.items():
        print(f"  {t}: {pct:.1f}% missing")
# Fill remaining NAs
price_data = price_data.ffill().bfill()
available_tickers = [t for t in yf_tickers if t in price_data.columns]
missing_tickers = [t for t in yf_tickers if t not in price_data.columns]
if missing_tickers:
    print(f"\n⚠️ Could not download data for: {missing_tickers}")
    print("These will be excluded from quantitative analysis.")
print(f"\n✅ Successfully downloaded data for {len(available_tickers)} tickers")
Downloading portfolio data...
Data period: 2023-02-08 to 2026-02-06
Trading days: 774

⚠️ Tickers with >20% missing data:
  LAC: 21.6% missing
  NBIS: 56.7% missing
  WYFI: 83.2% missing

✅ Successfully downloaded data for 29 tickers
In [7]:
# Download benchmark data
print("Downloading benchmark data...")
benchmark_tickers = {'S&P 500': 'SPY', 'Nasdaq 100': 'QQQ', 'MSCI World': 'URTH'}
benchmark_data = yf.download(list(benchmark_tickers.values()), start=start_date, end=end_date, progress=False)
if isinstance(benchmark_data.columns, pd.MultiIndex):
    benchmark_data = benchmark_data['Adj Close'] if 'Adj Close' in benchmark_data.columns.get_level_values(0) else benchmark_data['Close']
benchmark_data = benchmark_data.ffill().bfill()
print(f"✅ Benchmark data downloaded")
# Download lazy portfolio ETFs
print("\nDownloading lazy portfolio ETF data...")
ray_dalio = {'VTI': 0.30, 'TLT': 0.40, 'IEF': 0.15, 'GLD': 0.075, 'DBC': 0.075}
warren_buffett = {'VOO': 0.90, 'BND': 0.10}
sixty_forty = {'VTI': 0.60, 'BND': 0.40}
yale_endowment = {'VTI': 0.30, 'VEA': 0.15, 'VWO': 0.10, 'VNQ': 0.15, 'TLT': 0.15, 'TIP': 0.15}
shiller_cape = {'VTV': 0.25, 'VBR': 0.25, 'VYM': 0.25, 'SCHD': 0.25}
cathie_wood = {'ARKK': 0.30, 'ARKW': 0.25, 'ARKG': 0.20, 'ARKQ': 0.15, 'ARKF': 0.10}
all_lazy_tickers = list(set(
list(ray_dalio.keys()) + list(warren_buffett.keys()) + list(sixty_forty.keys()) +
list(yale_endowment.keys()) + list(shiller_cape.keys()) + list(cathie_wood.keys())
))
lazy_data = yf.download(all_lazy_tickers, start=start_date, end=end_date, progress=False)
if isinstance(lazy_data.columns, pd.MultiIndex):
    lazy_data = lazy_data['Adj Close'] if 'Adj Close' in lazy_data.columns.get_level_values(0) else lazy_data['Close']
lazy_data = lazy_data.ffill().bfill()
print(f"✅ Lazy portfolio data downloaded")
Downloading benchmark data...
✅ Benchmark data downloaded

Downloading lazy portfolio ETF data...
✅ Lazy portfolio data downloaded
3. Portfolio Returns Calculation
In [8]:
# Build weights array aligned with available data
ticker_to_label = {v['yf_ticker']: k for k, v in portfolio_raw.items()}
weight_map = {v['yf_ticker']: v['weight'] / 100.0 for k, v in portfolio_raw.items()}
# Filter to available tickers
weights_series = pd.Series({t: weight_map[t] for t in available_tickers if t in weight_map})
# Normalize weights (exclude cash, normalize equity portion)
equity_total = weights_series.sum()
weights_normalized = weights_series / equity_total * (1 - cash_pct/100)
print(f"Equity allocation: {(1 - cash_pct/100)*100:.1f}%")
print(f"Cash allocation: {cash_pct:.1f}%")
print(f"Sum of normalized weights: {weights_normalized.sum():.4f}")
# Calculate portfolio returns
returns_data = price_data[available_tickers].pct_change().dropna()
# Align weights with returns columns
aligned_weights = weights_normalized.reindex(returns_data.columns).fillna(0)
portfolio_returns = (returns_data * aligned_weights).sum(axis=1)
# Add cash return (assume risk-free rate ~4.5% annual = ~0.018% daily)
daily_rf = 0.045 / 252
portfolio_returns = portfolio_returns + (cash_pct / 100) * daily_rf
print(f"\nPortfolio returns calculated: {len(portfolio_returns)} trading days")
print(f"Annualized return: {((1 + portfolio_returns.mean())**252 - 1)*100:.2f}%")
print(f"Annualized volatility: {portfolio_returns.std() * np.sqrt(252) * 100:.2f}%")
Equity allocation: 84.6%
Cash allocation: 15.4%
Sum of normalized weights: 0.8460

Portfolio returns calculated: 773 trading days
Annualized return: 47.00%
Annualized volatility: 17.92%
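The annualized return above compounds the *arithmetic* mean daily return, `(1 + mean)**252 - 1`, which always sits above the geometric CAGR when volatility is nonzero (the QuantStats report later shows CAGR ≈ 44.66% against the 47.00% here). A minimal sketch of the gap on synthetic data — the numbers are illustrative, not the portfolio's:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic daily returns: ~0.1% mean, ~1% daily vol (illustrative only)
daily = rng.normal(0.001, 0.01, 252 * 3)

# Arithmetic annualization, as used above: compounds the *average* daily return
arith_ann = (1 + daily.mean()) ** 252 - 1

# Geometric (CAGR): compounds the actual return path, then annualizes
years = len(daily) / 252
cagr = (1 + daily).prod() ** (1 / years) - 1

# Volatility drag: CAGR ≈ arithmetic - 0.5 * sigma^2 (annualized variance)
drag = 0.5 * (daily.std() * np.sqrt(252)) ** 2
print(f"arithmetic: {arith_ann:.2%}, CAGR: {cagr:.2%}, approx drag: {drag:.2%}")
```

The half-variance drag term explains most of the difference; with ~18% annualized vol that is roughly 1.6 percentage points per year.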
In [9]:
# Use only tickers with sufficient data for optimization
opt_data = price_data[available_tickers].dropna(axis=1, how='any')
opt_tickers = opt_data.columns.tolist()
opt_weights = weights_normalized.reindex(opt_tickers).fillna(0)
opt_weights = opt_weights / opt_weights.sum() # Re-normalize for optimization
# Calculate expected returns and covariance
mu = expected_returns.mean_historical_return(opt_data)
S = risk_models.sample_cov(opt_data)
print("Expected Annual Returns:")
for t in opt_tickers:
    label = ticker_to_label.get(t, t)
    print(f"  {label:8s} ({t:10s}): {mu[t]*100:>8.2f}%")
Expected Annual Returns:
  GOOGL    (GOOGL     ):    49.37%
  SGLN     (SGLN.L    ):   -70.66%
  AMZN     (AMZN      ):    28.17%
  KGC      (KGC       ):    96.35%
  PLS      (PLS.AX    ):    -4.00%
  BARC     (BARC.L    ):    35.31%
  OKLO     (OKLO      ):    88.87%
  RR       (RR.L      ):   120.15%
  LEU      (LEU       ):    83.94%
  NVDA     (NVDA      ):    99.42%
  BABA     (BABA      ):    16.03%
  WYFI     (WYFI      ):     4.50%
  XOM      (XOM       ):    12.75%
  BIDU     (BIDU      ):    -0.47%
  BE       (BE        ):    79.83%
  HAL      (HAL       ):    -0.19%
  NBIS     (NBIS      ):    60.94%
  PAAS     (PAAS      ):    51.03%
  DRO      (DRO.AX    ):   101.13%
  ARG      (ARG.TO    ):    68.82%
  LAR      (LAR       ):   -13.19%
  MELI     (MELI      ):    19.40%
  LAC      (LAC       ):   -22.36%
  PMET     (PMET.TO   ):   -25.86%
  XIAOMI   (1810.HK   ):    40.27%
  NIO      (NIO       ):   -21.21%
  LTR      (LTR.AX    ):     3.12%
  ACG      (ACG.L     ):    14.50%
  LKY      (LKY.AX    ):    32.35%
In [10]:
# Efficient Frontier Plot
fig, ax = plt.subplots(figsize=(12, 8))
# Plot efficient frontier
ef_plot = EfficientFrontier(mu, S)
try:
    plotting.plot_efficient_frontier(ef_plot, ax=ax, show_assets=True)
except Exception as e:
    # Manual efficient frontier
    n_points = 100
    target_returns = np.linspace(mu.min(), mu.max(), n_points)
    frontier_vols = []
    frontier_rets = []
    for target_ret in target_returns:
        try:
            ef_temp = EfficientFrontier(mu, S)
            ef_temp.efficient_return(target_ret)
            ret, vol, _ = ef_temp.portfolio_performance()
            frontier_vols.append(vol)
            frontier_rets.append(ret)
        except Exception:
            pass
    ax.plot(frontier_vols, frontier_rets, 'b-', linewidth=2, label='Efficient Frontier')
# Current portfolio position
try:
    ef_current = EfficientFrontier(mu, S)
    ef_current.set_weights(opt_weights.to_dict())
    current_ret, current_vol, current_sharpe = ef_current.portfolio_performance()
    ax.scatter(current_vol, current_ret, marker='*', s=500, c='red', zorder=5, label=f'Current Portfolio (Sharpe: {current_sharpe:.2f})')
    print(f"Current Portfolio: Return={current_ret*100:.2f}%, Vol={current_vol*100:.2f}%, Sharpe={current_sharpe:.2f}")
except Exception as e:
    print(f"Could not plot current portfolio: {e}")
# Max Sharpe portfolio
try:
    ef_sharpe = EfficientFrontier(mu, S)
    weights_sharpe = ef_sharpe.max_sharpe()
    sharpe_ret, sharpe_vol, sharpe_ratio = ef_sharpe.portfolio_performance()
    ax.scatter(sharpe_vol, sharpe_ret, marker='D', s=200, c='green', zorder=5, label=f'Max Sharpe (Sharpe: {sharpe_ratio:.2f})')
    print(f"Max Sharpe: Return={sharpe_ret*100:.2f}%, Vol={sharpe_vol*100:.2f}%, Sharpe={sharpe_ratio:.2f}")
except Exception as e:
    print(f"Max Sharpe optimization failed: {e}")
# Min Vol portfolio
try:
    ef_minvol = EfficientFrontier(mu, S)
    weights_minvol = ef_minvol.min_volatility()
    minvol_ret, minvol_vol, minvol_sharpe = ef_minvol.portfolio_performance()
    ax.scatter(minvol_vol, minvol_ret, marker='^', s=200, c='orange', zorder=5, label=f'Min Volatility (Sharpe: {minvol_sharpe:.2f})')
    print(f"Min Vol: Return={minvol_ret*100:.2f}%, Vol={minvol_vol*100:.2f}%, Sharpe={minvol_sharpe:.2f}")
except Exception as e:
    print(f"Min Vol optimization failed: {e}")
ax.set_title('Efficient Frontier with Current Portfolio', fontsize=14)
ax.legend(fontsize=10)
plt.tight_layout()
plt.savefig('efficient_frontier.png', dpi=150, bbox_inches='tight')
plt.show()
Current Portfolio: Return=35.09%, Vol=21.19%, Sharpe=1.66
Max Sharpe: Return=89.78%, Vol=20.82%, Sharpe=4.31
Min Vol: Return=30.13%, Vol=13.72%, Sharpe=2.20
In [11]:
# Optimal portfolio weights comparison
try:
    ef_s = EfficientFrontier(mu, S)
    w_sharpe = ef_s.max_sharpe()
    cleaned_sharpe = ef_s.clean_weights()
    ef_m = EfficientFrontier(mu, S)
    w_minvol = ef_m.min_volatility()
    cleaned_minvol = ef_m.clean_weights()
    comparison_df = pd.DataFrame({
        'Current': opt_weights,
        'Max Sharpe': pd.Series(cleaned_sharpe),
        'Min Volatility': pd.Series(cleaned_minvol)
    }).fillna(0)
    # Rename index to readable labels
    comparison_df.index = [ticker_to_label.get(t, t) for t in comparison_df.index]
    # Filter to show only rows with non-zero weights
    mask = (comparison_df != 0).any(axis=1)
    display_df = comparison_df[mask].sort_values('Current', ascending=False)
    # Format as percentages
    display_pct = display_df.map(lambda x: f'{x*100:.1f}%')
    print("Portfolio Weights Comparison:")
    print("="*60)
    print(display_pct.to_string())
    # Plot comparison
    fig = go.Figure()
    for col in display_df.columns:
        fig.add_trace(go.Bar(name=col, x=display_df.index, y=display_df[col]*100))
    fig.update_layout(
        barmode='group',
        title='Portfolio Weights: Current vs Optimal',
        yaxis_title='Weight (%)',
        width=1000, height=500
    )
    fig.show()
except Exception as e:
    print(f"Optimization comparison failed: {e}")
Portfolio Weights Comparison:
============================================================
Current Max Sharpe Min Volatility
GOOGL 9.7% 13.2% 13.0%
SGLN 9.2% 0.0% 4.9%
AMZN 7.4% 0.0% 7.2%
KGC 6.5% 22.2% 4.7%
PLS 6.4% 0.0% 3.7%
BARC 5.8% 0.0% 7.6%
OKLO 5.3% 0.0% 0.0%
RR 5.2% 36.4% 6.8%
LEU 4.2% 0.0% 0.0%
NVDA 3.9% 8.1% 0.0%
BABA 2.9% 0.0% 0.0%
WYFI 2.8% 0.0% 4.8%
XOM 2.6% 5.9% 32.6%
BIDU 2.6% 0.0% 0.0%
BE 2.5% 0.0% 0.0%
HAL 2.5% 0.0% 0.0%
NBIS 2.3% 0.0% 0.0%
PAAS 2.0% 0.0% 0.0%
DRO 1.8% 4.8% 1.4%
ARG 1.8% 2.5% 0.0%
LAR 1.8% 0.0% 0.0%
MELI 1.7% 0.0% 3.9%
LAC 1.7% 0.0% 0.0%
PMET 1.6% 0.0% 0.0%
XIAOMI 1.6% 6.8% 7.5%
NIO 1.6% 0.0% 0.0%
LTR 1.5% 0.0% 0.0%
ACG 0.9% 0.0% 1.7%
LKY 0.2% 0.0% 0.1%
In [12]:
# Discrete Allocation for Max Sharpe Portfolio
try:
    latest_prices = get_latest_prices(opt_data)
    portfolio_value = 140000 # Approximate portfolio value in GBP
    da = DiscreteAllocation(cleaned_sharpe, latest_prices, total_portfolio_value=portfolio_value)
    allocation, leftover = da.greedy_portfolio()
    print("Optimal Share Allocation (Max Sharpe) for ~£140,000 portfolio:")
    print("="*60)
    for ticker, shares in sorted(allocation.items(), key=lambda x: x[1], reverse=True):
        label = ticker_to_label.get(ticker, ticker)
        price = latest_prices[ticker]
        value = shares * price
        print(f"  {label:8s} ({ticker:10s}): {shares:>4d} shares @ {price:>10.2f} = {value:>12.2f}")
    print(f"\n  Leftover cash: £{leftover:,.2f}")
except Exception as e:
    print(f"Discrete allocation failed: {e}")
Optimal Share Allocation (Max Sharpe) for ~£140,000 portfolio:
============================================================
  DRO      (DRO.AX    ): 2308 shares @       2.90 =      6693.20
  KGC      (KGC       ):  969 shares @      32.09 =     31095.21
  ARG      (ARG.TO    ):  628 shares @       5.56 =      3491.68
  XIAOMI   (1810.HK   ):  272 shares @      35.18 =      9568.96
  NVDA     (NVDA      ):   62 shares @     185.41 =     11495.42
  GOOGL    (GOOGL     ):   58 shares @     322.86 =     18725.88
  XOM      (XOM       ):   56 shares @     149.05 =      8346.80
  RR       (RR.L      ):   41 shares @    1229.00 =     50389.00

  Leftover cash: £193.85
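`greedy_portfolio()` converts target weights into whole-share counts by repeatedly buying one share of whichever asset is furthest below its target value. A simplified sketch of that idea (an illustrative re-implementation with hypothetical tickers, not PyPortfolioOpt's exact algorithm, which adds refinements):

```python
def greedy_allocate(weights, prices, total):
    """Simplified greedy share allocation: repeatedly buy one share of the
    asset whose current value is furthest below its target value."""
    shares = {t: 0 for t in weights}
    cash = total
    while True:
        # Deficit = target value minus value already allocated,
        # restricted to assets we can still afford one share of
        deficits = {
            t: weights[t] * total - shares[t] * prices[t]
            for t in weights
            if prices[t] <= cash
        }
        if not deficits:
            break
        best = max(deficits, key=deficits.get)
        if deficits[best] <= 0:
            break  # everything affordable is at or above target
        shares[best] += 1
        cash -= prices[best]
    return shares, cash

weights = {'AAA': 0.6, 'BBB': 0.4}          # hypothetical tickers
prices = {'AAA': 100.0, 'BBB': 40.0}
alloc, leftover = greedy_allocate(weights, prices, 1000.0)
print(alloc, leftover)  # → {'AAA': 6, 'BBB': 10} 0.0
```

One caveat on the table above: `latest_prices` are in each listing's native quote currency (RR.L at 1229.00 is pence, DRO.AX is AUD, 1810.HK is HKD), so the summed "values" mix currencies; a production version would convert everything to GBP first.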
In [13]:
# Riskfolio-Lib Analysis
returns_rf = opt_data.pct_change().dropna()
# Create Portfolio object
port = rp.Portfolio(returns=returns_rf)
port.assets_stats(method_mu='hist', method_cov='hist')
# Current weights as array
w_current = opt_weights.reindex(opt_data.columns).fillna(0).values.reshape(-1, 1)
print("Advanced Risk Metrics for Current Portfolio")
print("="*60)
# Calculate various risk measures manually
port_ret = (returns_rf * w_current.flatten()).sum(axis=1)
annual_ret = port_ret.mean() * 252
annual_vol = port_ret.std() * np.sqrt(252)
sharpe = annual_ret / annual_vol if annual_vol > 0 else 0
# VaR and CVaR
var_95 = np.percentile(port_ret, 5)
cvar_95 = port_ret[port_ret <= var_95].mean()
# Max Drawdown
cum_ret = (1 + port_ret).cumprod()
running_max = cum_ret.cummax()
drawdown = (cum_ret - running_max) / running_max
max_dd = drawdown.min()
# Ulcer Index
ulcer_index = np.sqrt(np.mean(drawdown**2))
# Calmar Ratio
calmar = annual_ret / abs(max_dd) if max_dd != 0 else 0
# Sortino Ratio
downside_ret = port_ret[port_ret < 0]
downside_vol = downside_ret.std() * np.sqrt(252)
sortino = annual_ret / downside_vol if downside_vol > 0 else 0
metrics = {
'Annual Return': f'{annual_ret*100:.2f}%',
'Annual Volatility': f'{annual_vol*100:.2f}%',
'Sharpe Ratio': f'{sharpe:.3f}',
'Sortino Ratio': f'{sortino:.3f}',
'Calmar Ratio': f'{calmar:.3f}',
'VaR (95%)': f'{var_95*100:.3f}%',
'CVaR (95%)': f'{cvar_95*100:.3f}%',
'Max Drawdown': f'{max_dd*100:.2f}%',
'Ulcer Index': f'{ulcer_index:.4f}',
'Skewness': f'{stats.skew(port_ret):.3f}',
'Kurtosis': f'{stats.kurtosis(port_ret):.3f}',
}
for k, v in metrics.items():
    print(f"  {k:25s}: {v}")
Advanced Risk Metrics for Current Portfolio
============================================================
  Annual Return            : 44.75%
  Annual Volatility        : 21.19%
  Sharpe Ratio             : 2.112
  Sortino Ratio            : 2.912
  Calmar Ratio             : 2.131
  VaR (95%)                : -1.988%
  CVaR (95%)               : -2.999%
  Max Drawdown             : -21.00%
  Ulcer Index              : 0.0443
  Skewness                 : -0.294
  Kurtosis                 : 3.690
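The VaR and CVaR figures above are historical estimates: VaR(95%) is simply the 5th percentile of the daily return distribution, and CVaR(95%) (expected shortfall) is the average of the returns at or below that cutoff. A minimal illustration on synthetic data (numbers are illustrative, not the portfolio's):

```python
import numpy as np

rng = np.random.default_rng(42)
returns = rng.normal(0.0005, 0.012, 1000)  # synthetic daily returns

# Historical VaR(95%): the 5th percentile of daily returns
var_95 = np.percentile(returns, 5)

# CVaR(95%) / expected shortfall: mean return on days at or below the VaR
cvar_95 = returns[returns <= var_95].mean()

print(f"VaR(95%): {var_95:.2%}, CVaR(95%): {cvar_95:.2%}")
```

Because CVaR averages only the tail beyond the VaR cutoff, it is always at least as severe as the VaR itself, which matches the -2.999% vs -1.988% printed above.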
In [14]:
# Risk Contribution Analysis
try:
    # Calculate marginal risk contribution
    cov_matrix = returns_rf.cov() * 252
    w_flat = w_current.flatten()
    port_vol = np.sqrt(w_flat @ cov_matrix.values @ w_flat)
    marginal_contrib = (cov_matrix.values @ w_flat) / port_vol
    risk_contrib = w_flat * marginal_contrib
    risk_contrib_pct = risk_contrib / risk_contrib.sum() * 100
    rc_df = pd.DataFrame({
        'Ticker': [ticker_to_label.get(t, t) for t in opt_data.columns],
        'Weight_%': w_flat * 100,
        'Risk_Contribution_%': risk_contrib_pct
    }).sort_values('Risk_Contribution_%', ascending=False)
    rc_df = rc_df[rc_df['Weight_%'] > 0.1]
    fig = make_subplots(rows=1, cols=2, subplot_titles=('Portfolio Weight', 'Risk Contribution'))
    fig.add_trace(go.Bar(x=rc_df['Ticker'], y=rc_df['Weight_%'], name='Weight', marker_color='steelblue'), row=1, col=1)
    fig.add_trace(go.Bar(x=rc_df['Ticker'], y=rc_df['Risk_Contribution_%'], name='Risk Contribution', marker_color='coral'), row=1, col=2)
    fig.update_layout(title='Weight vs Risk Contribution by Asset', width=1100, height=500, showlegend=False)
    fig.show()
    # Show over/under-risked assets
    rc_df['Risk_Weight_Ratio'] = rc_df['Risk_Contribution_%'] / rc_df['Weight_%']
    print("\nRisk/Weight Ratio (>1 = contributing more risk than weight suggests):")
    print("="*70)
    for _, row in rc_df.sort_values('Risk_Weight_Ratio', ascending=False).iterrows():
        flag = '🔴' if row['Risk_Weight_Ratio'] > 1.5 else '🟡' if row['Risk_Weight_Ratio'] > 1.0 else '🟢'
        print(f"  {flag} {row['Ticker']:8s}: Weight={row['Weight_%']:.1f}%, Risk={row['Risk_Contribution_%']:.1f}%, Ratio={row['Risk_Weight_Ratio']:.2f}")
except Exception as e:
    print(f"Risk contribution analysis failed: {e}")
Risk/Weight Ratio (>1 = contributing more risk than weight suggests):
======================================================================
  🔴 OKLO    : Weight=5.3%, Risk=15.4%, Ratio=2.89
  🔴 LEU     : Weight=4.2%, Risk=9.3%, Ratio=2.20
  🔴 BE      : Weight=2.5%, Risk=4.6%, Ratio=1.80
  🔴 LAR     : Weight=1.8%, Risk=3.0%, Ratio=1.65
  🔴 PMET    : Weight=1.6%, Risk=2.7%, Ratio=1.64
  🔴 NBIS    : Weight=2.3%, Risk=3.6%, Ratio=1.58
  🔴 LAC     : Weight=1.7%, Risk=2.6%, Ratio=1.57
  🟡 LTR     : Weight=1.5%, Risk=1.7%, Ratio=1.15
  🟡 NIO     : Weight=1.6%, Risk=1.8%, Ratio=1.14
  🟡 NVDA    : Weight=3.9%, Risk=4.3%, Ratio=1.10
  🟡 PAAS    : Weight=2.0%, Risk=2.1%, Ratio=1.08
  🟢 SGLN    : Weight=9.2%, Risk=8.3%, Ratio=0.90
  🟢 KGC     : Weight=6.5%, Risk=5.8%, Ratio=0.89
  🟢 BABA    : Weight=2.9%, Risk=2.5%, Ratio=0.87
  🟢 BIDU    : Weight=2.6%, Risk=2.2%, Ratio=0.86
  🟢 ARG     : Weight=1.8%, Risk=1.5%, Ratio=0.85
  🟢 PLS     : Weight=6.4%, Risk=5.1%, Ratio=0.79
  🟢 WYFI    : Weight=2.8%, Risk=2.1%, Ratio=0.74
  🟢 AMZN    : Weight=7.4%, Risk=5.0%, Ratio=0.68
  🟢 LKY     : Weight=0.2%, Risk=0.1%, Ratio=0.65
  🟢 GOOGL   : Weight=9.7%, Risk=6.2%, Ratio=0.64
  🟢 MELI    : Weight=1.7%, Risk=1.0%, Ratio=0.55
  🟢 HAL     : Weight=2.5%, Risk=1.3%, Ratio=0.54
  🟢 RR      : Weight=5.2%, Risk=2.8%, Ratio=0.53
  🟢 DRO     : Weight=1.8%, Risk=0.9%, Ratio=0.52
  🟢 BARC    : Weight=5.8%, Risk=2.6%, Ratio=0.45
  🟢 XIAOMI  : Weight=1.6%, Risk=0.6%, Ratio=0.40
  🟢 ACG     : Weight=0.9%, Risk=0.3%, Ratio=0.32
  🟢 XOM     : Weight=2.6%, Risk=0.6%, Ratio=0.22
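The contributions above rest on the Euler decomposition: with marginal contributions ∂σₚ/∂wᵢ = (Σw)ᵢ/σₚ, the products wᵢ·∂σₚ/∂wᵢ sum exactly to portfolio volatility σₚ, which is why normalizing them gives clean percentages. A toy check with a 3-asset covariance matrix (illustrative numbers, not the portfolio's):

```python
import numpy as np

# Toy 3-asset portfolio (annualized covariance, illustrative)
w = np.array([0.5, 0.3, 0.2])
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])

port_vol = np.sqrt(w @ cov @ w)
marginal = cov @ w / port_vol   # d(sigma_p) / d(w_i)
contrib = w * marginal          # Euler: each asset's slice of sigma_p

print(f"portfolio vol: {port_vol:.4f}, sum of contributions: {contrib.sum():.4f}")
```

Because the decomposition is exact, `contrib / port_vol` is precisely the "Risk_Contribution_%" column computed in the cell above.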
In [15]:
# Hierarchical Risk Parity (HRP)
try:
    port_hrp = rp.Portfolio(returns=returns_rf)
    port_hrp.assets_stats(method_mu='hist', method_cov='hist')
    w_hrp = port_hrp.optimization(model='HRP', rm='MV', rf=0.0)
    if w_hrp is not None:
        hrp_comparison = pd.DataFrame({
            'Current': pd.Series(w_flat, index=opt_data.columns),
            'HRP': w_hrp.iloc[:, 0]
        })
        hrp_comparison.index = [ticker_to_label.get(t, t) for t in hrp_comparison.index]
        # Filter significant weights
        mask = (hrp_comparison > 0.01).any(axis=1)
        hrp_display = hrp_comparison[mask].sort_values('Current', ascending=False)
        fig = go.Figure()
        fig.add_trace(go.Bar(name='Current', x=hrp_display.index, y=hrp_display['Current']*100))
        fig.add_trace(go.Bar(name='HRP Optimal', x=hrp_display.index, y=hrp_display['HRP']*100))
        fig.update_layout(
            barmode='group',
            title='Current Allocation vs HRP Optimal',
            yaxis_title='Weight (%)',
            width=1000, height=500
        )
        fig.show()
        print("Top HRP Recommendations:")
        print("="*60)
        diff = hrp_comparison['HRP'] - hrp_comparison['Current']
        for idx in diff.abs().sort_values(ascending=False).head(10).index:
            action = "⬆️ INCREASE" if diff[idx] > 0 else "⬇️ DECREASE"
            print(f"  {action} {idx:8s}: {diff[idx]*100:+.1f}pp (Current: {hrp_comparison.loc[idx, 'Current']*100:.1f}% → HRP: {hrp_comparison.loc[idx, 'HRP']*100:.1f}%)")
    else:
        print("HRP optimization did not converge")
except Exception as e:
    print(f"HRP analysis failed: {e}")
HRP analysis failed: loop of ufunc does not support argument 0 of type NoneType which has no callable exp method
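The failure above most likely comes from routing HRP through `rp.Portfolio`: in Riskfolio-Lib the hierarchical models (HRP, HERC) live on a separate class, `rp.HCPortfolio`, so the mean-variance `Portfolio` object leaves inputs the HRP path needs as `None` (hence the `NoneType` ufunc error). As a library-free fallback, HRP is compact enough to sketch directly with scipy's hierarchical clustering — a sketch of López de Prado's algorithm under default choices (single linkage, correlation distance), not Riskfolio-Lib's exact implementation:

```python
import numpy as np
import pandas as pd
from scipy.cluster.hierarchy import linkage, leaves_list
from scipy.spatial.distance import squareform

def hrp_weights(returns: pd.DataFrame) -> pd.Series:
    """Hierarchical Risk Parity: cluster assets by correlation distance,
    then split weight top-down between cluster halves by inverse variance."""
    cov, corr = returns.cov(), returns.corr()
    # Correlation distance matrix -> condensed form for scipy linkage
    dist = np.sqrt(0.5 * (1 - corr))
    link = linkage(squareform(dist.values, checks=False), method='single')
    tickers = corr.index[leaves_list(link)].tolist()  # quasi-diagonal order

    w = pd.Series(1.0, index=tickers)
    clusters = [tickers]
    while clusters:
        # Bisect every cluster, then allocate between the halves
        clusters = [c[i:j] for c in clusters
                    for i, j in ((0, len(c) // 2), (len(c) // 2, len(c)))
                    if len(c) > 1]
        for left, right in zip(clusters[::2], clusters[1::2]):
            def cluster_var(items):
                sub = cov.loc[items, items]
                ivp = 1 / np.diag(sub)      # inverse-variance portfolio
                ivp /= ivp.sum()
                return ivp @ sub.values @ ivp
            alpha = 1 - cluster_var(left) / (cluster_var(left) + cluster_var(right))
            w[left] *= alpha                # lower-variance half gets more weight
            w[right] *= 1 - alpha
    return w.reindex(returns.columns)

# Demo on synthetic returns (illustrative)
rng = np.random.default_rng(7)
demo = pd.DataFrame(rng.normal(0, 0.01, (250, 4)), columns=['W', 'X', 'Y', 'Z'])
print(hrp_weights(demo).round(3))
```

By construction the weights are strictly positive and sum to one, which is what makes HRP attractive for a long-only book like this one.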
In [16]:
# QuantStats metrics
print("Comprehensive Performance Metrics")
print("="*60)
try:
    qs.reports.metrics(portfolio_returns, mode='full', display=True)
except Exception as e:
    # Manual metrics
    def sf(v):
        if hasattr(v, 'item'): return v.item()
        if hasattr(v, 'iloc'): return float(v.iloc[0])
        return float(v)
    print(f"  CAGR: {sf(qs.stats.cagr(portfolio_returns))*100:.2f}%")
    print(f"  Sharpe: {sf(qs.stats.sharpe(portfolio_returns)):.3f}")
    print(f"  Sortino: {sf(qs.stats.sortino(portfolio_returns)):.3f}")
    print(f"  Max Drawdown: {sf(qs.stats.max_drawdown(portfolio_returns))*100:.2f}%")
    print(f"  Volatility: {sf(qs.stats.volatility(portfolio_returns))*100:.2f}%")
    print(f"  Win Rate: {sf(qs.stats.win_rate(portfolio_returns))*100:.1f}%")
    print(f"  Best Day: {portfolio_returns.max()*100:.2f}%")
    print(f"  Worst Day: {portfolio_returns.min()*100:.2f}%")
Comprehensive Performance Metrics
============================================================
Parameter Value
-------------- -------
Risk-Free Rate 0.0%
Periods/Year 252
Compounded Yes
Match Dates Yes
Strategy
------------------------- ----------
Start Period 2023-02-09
End Period 2026-02-06
Risk-Free Rate 0.0%
Time in Market 100.0%
Cumulative Return 210.36%
CAGR﹪ 44.66%
Sharpe 2.15
Prob. Sharpe Ratio 99.99%
Smart Sharpe 1.95
Sortino 3.26
Smart Sortino 2.95
Sortino/√2 2.3
Smart Sortino/√2 2.09
Omega 1.45
Max Drawdown -17.92%
Max DD Date 2025-04-08
Max DD Period Start 2025-02-18
Max DD Period End 2025-05-13
Longest DD Days 120
Volatility (ann.) 17.92%
Calmar 2.49
Skew -0.29
Kurtosis 3.72
Ulcer Performance Index 56.64
Risk-Adjusted Return 44.66%
Risk-Return Ratio 0.14
Avg. Return 0.15%
Avg. Win 0.87%
Avg. Loss -0.79%
Win/Loss Ratio 1.1
Profit Ratio 0.83
Expected Daily % 0.15%
Expected Monthly % 3.11%
Expected Yearly % 32.73%
Kelly Criterion 17.68%
Risk of Ruin 0.0%
Daily Value-at-Risk -1.7%
Expected Shortfall (cVaR) -2.58%
Max Consecutive Wins 12
Max Consecutive Losses 6
Gain/Pain Ratio 0.45
Gain/Pain (1M) 4.33
Payoff Ratio 1.1
Profit Factor 1.45
Common Sense Ratio 1.64
CPC Index 0.91
Tail Ratio 1.13
Outlier Win Ratio 3.36
Outlier Loss Ratio 3.71
MTD -2.36%
3M 0.53%
6M 36.58%
YTD -0.69%
1Y 75.23%
3Y (ann.) 45.95%
5Y (ann.) 44.66%
10Y (ann.) 44.66%
All-time (ann.) 44.66%
Best Day 5.81%
Worst Day -6.3%
Best Month 17.75%
Worst Month -4.88%
Best Year 98.42%
Worst Year -0.69%
Avg. Drawdown -2.47%
Avg. Drawdown Days 13
Recovery Factor 6.6
Ulcer Index 0.04
Serenity Index 3.57
Avg. Up Month 5.92%
Avg. Down Month -2.35%
Win Days % 56.92%
Win Month % 67.57%
Win Quarter % 92.31%
Win Year % 75.0%
In [17]:
# Generate QuantStats HTML tearsheet
try:
    benchmark_spy = benchmark_data['SPY'].pct_change().dropna()
    # Align dates
    common_idx = portfolio_returns.index.intersection(benchmark_spy.index)
    qs.reports.html(
        portfolio_returns.loc[common_idx],
        benchmark=benchmark_spy.loc[common_idx],
        output='portfolio_tearsheet.html',
        title="Ferhat's Portfolio Analysis"
    )
    print("✅ QuantStats tearsheet saved to portfolio_tearsheet.html")
except Exception as e:
    print(f"Tearsheet generation failed: {e}")
findfont: Font family 'Arial' not found.
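The `findfont` warning simply means QuantStats' plots request Arial, which is not installed in this environment. Pointing matplotlib at a bundled font (or registering a local Arial `.ttf` — the path below is hypothetical) silences the warnings without affecting the saved tearsheet:

```python
import matplotlib
matplotlib.use('Agg')  # keep the non-interactive backend used above
import matplotlib.pyplot as plt
from matplotlib import font_manager

# Fall back to a font that ships with matplotlib (DejaVu Sans is bundled)
plt.rcParams['font.family'] = 'DejaVu Sans'

# Alternatively, register a local Arial .ttf if one exists (hypothetical path):
# font_manager.fontManager.addfont('/path/to/Arial.ttf')
# plt.rcParams['font.family'] = 'Arial'
print(plt.rcParams['font.family'])
```

Run this once before the tearsheet cell and the warning spam disappears.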
✅ QuantStats tearsheet saved to portfolio_tearsheet.html
In [18]:
# Monthly Returns Heatmap
monthly_returns = portfolio_returns.resample('ME').apply(lambda x: (1 + x).prod() - 1)
# Create pivot table
monthly_df = pd.DataFrame({
'Year': monthly_returns.index.year,
'Month': monthly_returns.index.month,
'Return': monthly_returns.values * 100
})
monthly_pivot = monthly_df.pivot_table(index='Year', columns='Month', values='Return')
month_names = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']
monthly_pivot.columns = [month_names[m - 1] for m in monthly_pivot.columns]  # robust if any calendar month is absent from the data
fig = go.Figure(data=go.Heatmap(
z=monthly_pivot.values,
x=monthly_pivot.columns,
y=monthly_pivot.index.astype(str),
colorscale='RdYlGn',
zmid=0,
text=np.round(monthly_pivot.values, 1),
texttemplate='%{text:.1f}%',
textfont={"size": 11},
hoverongaps=False
))
fig.update_layout(
title='Monthly Returns Heatmap (%)',
width=900, height=400
)
fig.show()
In [19]:
# Drawdown Analysis
cum_returns = (1 + portfolio_returns).cumprod()
running_max = cum_returns.cummax()
drawdown = (cum_returns - running_max) / running_max * 100
fig = go.Figure()
fig.add_trace(go.Scatter(
x=drawdown.index, y=drawdown.values,
fill='tozeroy',
fillcolor='rgba(255,0,0,0.2)',
line=dict(color='red', width=1),
name='Drawdown'
))
fig.update_layout(
title='Portfolio Drawdown (%)',
xaxis_title='Date',
yaxis_title='Drawdown (%)',
width=1000, height=400
)
fig.show()
# Top 5 drawdowns
print("\nTop 5 Drawdown Periods:")
print("="*60)
dd_series = drawdown
dd_periods = []
in_dd = False
start = None
for i in range(len(dd_series)):
    if dd_series.iloc[i] < 0 and not in_dd:
        in_dd = True
        start = dd_series.index[i]
    elif dd_series.iloc[i] >= 0 and in_dd:
        in_dd = False
        period_dd = dd_series.loc[start:dd_series.index[i]]
        dd_periods.append({
            'Start': start,
            'End': dd_series.index[i],
            'Max_DD': period_dd.min(),
            'Duration': (dd_series.index[i] - start).days
        })
# Include a drawdown that is still open at the end of the series
if in_dd:
    period_dd = dd_series.loc[start:]
    dd_periods.append({
        'Start': start,
        'End': dd_series.index[-1],
        'Max_DD': period_dd.min(),
        'Duration': (dd_series.index[-1] - start).days
    })
dd_periods.sort(key=lambda x: x['Max_DD'])
for i, p in enumerate(dd_periods[:5]):
print(f" {i+1}. {p['Start'].strftime('%Y-%m-%d')} to {p['End'].strftime('%Y-%m-%d')}: {p['Max_DD']:.2f}% ({p['Duration']} days)")
Top 5 Drawdown Periods:
============================================================
  1. 2025-02-18 to 2025-05-14: -17.92% (85 days)
  2. 2024-05-29 to 2024-09-26: -11.46% (120 days)
  3. 2023-09-18 to 2023-12-19: -8.10% (92 days)
  4. 2025-10-27 to 2025-12-10: -6.98% (44 days)
  5. 2025-10-16 to 2025-10-24: -5.85% (8 days)
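The cummax-based bookkeeping above can be packaged as a reusable helper. A minimal sketch with synthetic data (the `drawdown_periods` name is my own; like the original loop, this sketch only reports drawdowns that have fully recovered):

```python
import numpy as np
import pandas as pd

def drawdown_periods(returns: pd.Series) -> pd.DataFrame:
    """List completed drawdown periods (start, end, trough depth, duration).
    A drawdown still open at the end of the series is not reported."""
    cum = (1 + returns).cumprod()
    dd = cum / cum.cummax() - 1
    periods, start = [], None
    for ts, val in dd.items():
        if val < 0 and start is None:
            start = ts
        elif val >= 0 and start is not None:
            window = dd.loc[start:ts]
            periods.append({'Start': start, 'End': ts,
                            'Max_DD_%': window.min() * 100,
                            'Duration_days': (ts - start).days})
            start = None
    return pd.DataFrame(periods)

# Synthetic example: three down days, then a recovery
idx = pd.date_range('2025-01-01', periods=10, freq='D')
r = pd.Series([0.01, -0.01, -0.01, -0.01, 0.02, 0.02, 0.0, 0.0, 0.0, 0.0], index=idx)
print(drawdown_periods(r))
```

On this toy series the helper reports a single completed drawdown spanning the three losing days through the day the equity curve makes a new high.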
In [20]:
# Cumulative Returns vs Benchmarks
fig = go.Figure()
# Portfolio
cum_port = (1 + portfolio_returns).cumprod()
fig.add_trace(go.Scatter(x=cum_port.index, y=cum_port, name='Your Portfolio',
line=dict(width=3, color='red')))
# Benchmarks
colors = {'SPY': 'blue', 'QQQ': 'green', 'URTH': 'purple'}
names = {'SPY': 'S&P 500', 'QQQ': 'Nasdaq 100', 'URTH': 'MSCI World'}
for ticker in ['SPY', 'QQQ', 'URTH']:
if ticker in benchmark_data.columns:
bm_ret = benchmark_data[ticker].pct_change().dropna()
common = cum_port.index.intersection(bm_ret.index)
cum_bm = (1 + bm_ret.loc[common]).cumprod()
fig.add_trace(go.Scatter(x=cum_bm.index, y=cum_bm, name=names[ticker],
line=dict(width=2, color=colors[ticker])))
fig.update_layout(
title='Cumulative Returns: Your Portfolio vs Major Benchmarks',
xaxis_title='Date', yaxis_title='Cumulative Return (1 = starting value)',
width=1000, height=500,
hovermode='x unified',
legend=dict(x=0.02, y=0.98)
)
fig.show()
In [21]:
# Benchmark Comparison Table
print("Benchmark Comparison")
print("="*80)
def calc_metrics(returns_series):
def sf(v):
if hasattr(v, 'item'):
return v.item()
if hasattr(v, 'iloc'):
return float(v.iloc[0])
return float(v)
return {
'CAGR': sf(qs.stats.cagr(returns_series)),
'Volatility': sf(qs.stats.volatility(returns_series)),
'Sharpe': sf(qs.stats.sharpe(returns_series)),
'Sortino': sf(qs.stats.sortino(returns_series)),
'Max Drawdown': sf(qs.stats.max_drawdown(returns_series)),
'Win Rate': sf(qs.stats.win_rate(returns_series)),
'Best Day': float(returns_series.max()),
'Worst Day': float(returns_series.min())
}
comparison_results = {'Your Portfolio': calc_metrics(portfolio_returns)}
for ticker in ['SPY', 'QQQ', 'URTH']:
if ticker in benchmark_data.columns:
bm_ret = benchmark_data[ticker].pct_change().dropna()
comparison_results[names[ticker]] = calc_metrics(bm_ret)
comp_df = pd.DataFrame(comparison_results)
# Format: percentage metrics as '%', ratio metrics (Sharpe, Sortino) as plain numbers
comp_df = comp_df.astype(object)
pct_rows = ['CAGR', 'Volatility', 'Max Drawdown', 'Win Rate', 'Best Day', 'Worst Day']
for col in comp_df.columns:
    comp_df.loc[pct_rows, col] = comp_df.loc[pct_rows, col].apply(lambda x: f'{x*100:.2f}%')
    comp_df.loc[['Sharpe', 'Sortino'], col] = comp_df.loc[['Sharpe', 'Sortino'], col].apply(lambda x: f'{x:.2f}')
print(comp_df.to_string())
Benchmark Comparison
================================================================================
              Your Portfolio  S&P 500 Nasdaq 100 MSCI World
CAGR                  44.66%   21.03%     27.43%     19.74%
Volatility            17.92%   15.19%     19.65%     14.12%
Sharpe                  2.15     1.33       1.33       1.35
Sortino                 3.26     1.99       1.98       2.00
Max Drawdown         -17.92%  -18.76%    -22.77%    -16.94%
Win Rate              56.92%   57.28%     57.47%     56.70%
Best Day               5.81%   10.50%     12.00%      8.94%
Worst Day             -6.30%   -5.85%     -6.21%     -6.09%
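One thing the comparison table does not show is how much of the portfolio's return is simply market exposure. A hedged sketch of CAPM-style alpha and beta from daily returns, on synthetic data (the `alpha_beta` helper and the 252-day annualization convention are my own assumptions, not part of the notebook's pipeline):

```python
import numpy as np
import pandas as pd

def alpha_beta(port: pd.Series, bench: pd.Series, periods: int = 252):
    """beta = cov(port, bench) / var(bench); alpha is the daily intercept
    annualized by multiplying by `periods` (a simple convention)."""
    common = port.index.intersection(bench.index)  # align dates first
    p, b = port.loc[common], bench.loc[common]
    beta = p.cov(b) / b.var()
    alpha_daily = p.mean() - beta * b.mean()
    return alpha_daily * periods, beta

# Sanity check: a portfolio that is exactly 1.5x the benchmark
# should show beta 1.5 and zero alpha.
rng = np.random.default_rng(0)
idx = pd.date_range('2024-01-01', periods=500, freq='B')
bench = pd.Series(rng.normal(0.0005, 0.01, len(idx)), index=idx)
port = 1.5 * bench
alpha, beta = alpha_beta(port, bench)
print(f"alpha={alpha:.4f}, beta={beta:.2f}")
```

On the real data one would pass `portfolio_returns` and the SPY return series used earlier in the cell.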
In [22]:
# Rolling 30-day and 90-day performance comparison
fig = make_subplots(rows=2, cols=1, subplot_titles=('Rolling 30-Day Returns', 'Rolling 90-Day Returns'))
for window, row in [(30, 1), (90, 2)]:
roll_port = portfolio_returns.rolling(window).apply(lambda x: (1+x).prod()-1) * 100
fig.add_trace(go.Scatter(x=roll_port.index, y=roll_port, name=f'Portfolio ({window}d)',
line=dict(color='red', width=2)), row=row, col=1)
for ticker in ['SPY', 'QQQ']:
if ticker in benchmark_data.columns:
bm_ret = benchmark_data[ticker].pct_change().dropna()
roll_bm = bm_ret.rolling(window).apply(lambda x: (1+x).prod()-1) * 100
fig.add_trace(go.Scatter(x=roll_bm.index, y=roll_bm, name=f'{names[ticker]} ({window}d)',
line=dict(width=1)), row=row, col=1)
fig.update_layout(title='Rolling Performance Comparison', width=1000, height=700, hovermode='x unified')
fig.show()
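The `rolling(window).apply(lambda x: (1+x).prod()-1)` pattern used above recomputes a product for every window, which is slow on long series. An equivalent vectorized form, via a rolling sum of log returns, is sketched below (the `rolling_compound` name is my own):

```python
import numpy as np
import pandas as pd

def rolling_compound(returns: pd.Series, window: int) -> pd.Series:
    """Rolling compounded return, mathematically identical to
    returns.rolling(window).apply(lambda x: (1 + x).prod() - 1),
    but vectorized: prod(1+r) - 1 == expm1(sum(log1p(r)))."""
    return np.expm1(np.log1p(returns).rolling(window).sum())

# Verify equivalence on synthetic daily returns
idx = pd.date_range('2024-01-01', periods=200, freq='B')
r = pd.Series(np.random.default_rng(1).normal(0, 0.01, len(idx)), index=idx)
fast = rolling_compound(r, 30)
slow = r.rolling(30).apply(lambda x: (1 + x).prod() - 1)
print(np.allclose(fast.dropna(), slow.dropna()))
```

For the 30- and 90-day charts above this gives the same curves while avoiding a Python-level loop per window.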
In [23]:
# Calculate lazy portfolio returns
def calculate_lazy_returns(data, weights_dict):
available = [t for t in weights_dict.keys() if t in data.columns]
if not available:
return None
w = {t: weights_dict[t] for t in available}
total = sum(w.values())
w = {t: v/total for t, v in w.items()}
aligned = data[available]
rets = aligned.pct_change().dropna()
return (rets * pd.Series(w)).sum(axis=1)
lazy_portfolios = {
'Ray Dalio All Weather': calculate_lazy_returns(lazy_data, ray_dalio),
'Warren Buffett 90/10': calculate_lazy_returns(lazy_data, warren_buffett),
'60/40 Stock/Bond': calculate_lazy_returns(lazy_data, sixty_forty),
'Yale Endowment': calculate_lazy_returns(lazy_data, yale_endowment),
'Shiller CAPE Value': calculate_lazy_returns(lazy_data, shiller_cape),
'Cathie Wood ARK': calculate_lazy_returns(lazy_data, cathie_wood)
}
# Remove any that failed
lazy_portfolios = {k: v for k, v in lazy_portfolios.items() if v is not None}
# Cumulative returns chart
fig = go.Figure()
# Your portfolio (bold red)
cum_port = (1 + portfolio_returns).cumprod()
fig.add_trace(go.Scatter(x=cum_port.index, y=cum_port, name='Your Portfolio',
line=dict(width=3, color='red')))
# Lazy portfolios
colors_lazy = ['#1f77b4', '#ff7f0e', '#2ca02c', '#9467bd', '#8c564b', '#e377c2']
for i, (name, rets) in enumerate(lazy_portfolios.items()):
common = cum_port.index.intersection(rets.index)
cum_lazy = (1 + rets.loc[common]).cumprod()
fig.add_trace(go.Scatter(x=cum_lazy.index, y=cum_lazy, name=name,
line=dict(width=1.5, color=colors_lazy[i % len(colors_lazy)])))
fig.update_layout(
title='Cumulative Returns: Your Portfolio vs Famous Lazy Portfolios',
xaxis_title='Date', yaxis_title='Cumulative Return',
width=1000, height=500,
hovermode='x unified',
legend=dict(x=0.02, y=0.98)
)
fig.show()
In [24]:
# Risk-Return Scatter Plot
risk_return_data = []

def scalar(v):
    # qs.stats can return a one-element Series for a named input Series;
    # coerce to a plain float so the table prints cleanly
    if hasattr(v, 'item'):
        return v.item()
    if hasattr(v, 'iloc'):
        return float(v.iloc[0])
    return float(v)

def rr_row(label, rets):
    return {
        'Portfolio': label,
        'Return (CAGR)': scalar(qs.stats.cagr(rets)) * 100,
        'Risk (Volatility)': scalar(qs.stats.volatility(rets)) * 100,
        'Sharpe': scalar(qs.stats.sharpe(rets)),
        'Max Drawdown': scalar(qs.stats.max_drawdown(rets)) * 100
    }

# Your portfolio
risk_return_data.append(rr_row('Your Portfolio', portfolio_returns))
# Lazy portfolios
for name, rets in lazy_portfolios.items():
    risk_return_data.append(rr_row(name, rets))
# Benchmarks
for ticker in ['SPY', 'QQQ', 'URTH']:
    if ticker in benchmark_data.columns:
        bm_ret = benchmark_data[ticker].pct_change().dropna()
        risk_return_data.append(rr_row(names[ticker], bm_ret))
rr_df = pd.DataFrame(risk_return_data)
fig = go.Figure()
for _, row in rr_df.iterrows():
is_yours = row['Portfolio'] == 'Your Portfolio'
fig.add_trace(go.Scatter(
x=[row['Risk (Volatility)']],
y=[row['Return (CAGR)']],
mode='markers+text',
name=row['Portfolio'],
marker=dict(
size=20 if is_yours else 12,
color='red' if is_yours else 'steelblue',
symbol='star' if is_yours else 'circle',
line=dict(width=2, color='black') if is_yours else dict(width=1, color='white')
),
text=row['Portfolio'],
textposition='top center',
textfont=dict(size=10)
))
fig.update_layout(
title='Risk vs Return: Your Portfolio vs All Comparisons',
xaxis_title='Risk (Annual Volatility %)',
yaxis_title='Return (CAGR %)',
width=1000, height=600,
showlegend=False
)
fig.show()
# Print table
print("Risk-Return Summary Table")
print("="*90)
print(rr_df.to_string(index=False, float_format='{:.2f}'.format))
Risk-Return Summary Table
==========================================================================================
Portfolio Return (CAGR) Risk (Volatility) Sharpe Max Drawdown
       Your Portfolio         44.66             17.92   2.15       -17.92
Ray Dalio All Weather 9.01 9.21 0.98 -10.88
Warren Buffett 90/10 19.45 13.39 1.39 -16.87
60/40 Stock/Bond 13.95 9.76 1.39 -11.75
Yale Endowment 11.59 10.52 1.09 -11.30
Shiller CAPE Value 14.33 13.89 1.03 -17.07
Cathie Wood ARK 22.80 34.85 0.76 -35.61
S&P 500 21.03 15.19 1.33 -18.76
Nasdaq 100 27.43 19.65 1.33 -22.77
MSCI World 19.74 14.12 1.35 -16.94
In [25]:
# Comprehensive comparison table (all portfolios)
all_comparison = {}
all_comparison['Your Portfolio'] = calc_metrics(portfolio_returns)
for name, rets in lazy_portfolios.items():
all_comparison[name] = calc_metrics(rets)
all_comp_df = pd.DataFrame(all_comparison)
all_comp_df = all_comp_df.astype(object)
pct_rows = ['CAGR', 'Volatility', 'Max Drawdown', 'Win Rate', 'Best Day', 'Worst Day']
for col in all_comp_df.columns:
    all_comp_df.loc[pct_rows, col] = all_comp_df.loc[pct_rows, col].apply(lambda x: f'{x*100:.2f}%')
    all_comp_df.loc[['Sharpe', 'Sortino'], col] = all_comp_df.loc[['Sharpe', 'Sortino'], col].apply(lambda x: f'{x:.2f}')
print("Complete Comparison Table - Your Portfolio vs Lazy Portfolios")
print("="*100)
print(all_comp_df.to_string())
Complete Comparison Table - Your Portfolio vs Lazy Portfolios
====================================================================================================
Your Portfolio Ray Dalio All Weather Warren Buffett 90/10 60/40 Stock/Bond Yale Endowment Shiller CAPE Value Cathie Wood ARK
CAGR 44.66% 9.01% 19.45% 13.95% 11.59% 14.33% 22.80%
Volatility 17.92% 9.21% 13.39% 9.76% 10.52% 13.89% 34.85%
Sharpe                   2.15                  0.98                 1.39             1.39           1.09               1.03            0.76
Sortino                  3.26                  1.44                 2.07             2.09           1.62               1.52            1.12
Max Drawdown -17.92% -10.88% -16.87% -11.75% -11.30% -17.07% -35.61%
Win Rate 56.92% 54.40% 56.80% 55.20% 54.93% 52.40% 52.13%
Best Day 5.81% 3.78% 8.37% 6.18% 5.83% 7.12% 14.36%
Worst Day -6.30% -2.34% -5.21% -3.49% -3.81% -5.38% -7.59%
9. Correlation & Diversification Analysis¶
In [26]:
# Correlation matrix of portfolio holdings
corr_matrix = returns_data[available_tickers].corr()
# Rename columns to readable labels
labels = [ticker_to_label.get(t, t) for t in corr_matrix.columns]
fig = go.Figure(data=go.Heatmap(
z=corr_matrix.values,
x=labels, y=labels,
colorscale='RdBu_r',
zmid=0,
text=np.round(corr_matrix.values, 2),
texttemplate='%{text}',
textfont={"size": 8}
))
fig.update_layout(
title='Correlation Matrix of Portfolio Holdings',
width=1000, height=800
)
fig.show()
# Average correlation
avg_corr = corr_matrix.values[np.triu_indices_from(corr_matrix.values, k=1)].mean()
print(f"\nAverage pairwise correlation: {avg_corr:.3f}")
print(f"Diversification assessment: {'Good' if avg_corr < 0.3 else 'Moderate' if avg_corr < 0.5 else 'High concentration risk'}")
Average pairwise correlation: 0.123
Diversification assessment: Good
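Average pairwise correlation ignores position sizes. A complementary, weight-aware measure is the diversification ratio (weighted average of asset volatilities divided by portfolio volatility). This is a standalone sketch with synthetic data, not a riskfolio-lib call; the `diversification_ratio` helper is my own:

```python
import numpy as np
import pandas as pd

def diversification_ratio(returns: pd.DataFrame, weights: np.ndarray) -> float:
    """Diversification ratio: sum(w_i * vol_i) / portfolio vol.
    1.0 means no diversification benefit; higher is better."""
    cov = returns.cov().values
    vols = np.sqrt(np.diag(cov))
    port_vol = np.sqrt(weights @ cov @ weights)
    return float(weights @ vols / port_vol)

# Two (nearly) uncorrelated assets with equal vol, held 50/50:
# the ratio should be close to sqrt(2) ~ 1.41
rng = np.random.default_rng(2)
rets = pd.DataFrame(rng.normal(0, 0.01, (5000, 2)), columns=['A', 'B'])
w = np.array([0.5, 0.5])
print(round(diversification_ratio(rets, w), 2))
```

On the real holdings one would pass `returns_data[available_tickers]` and the portfolio weight vector in the same ticker order.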
In [27]:
# Geographic diversification
geo_allocation = {
'United States': sum(portfolio_raw[t]['weight'] for t in ['GOOGL', 'AMZN', 'KGC', 'OKLO', 'LEU', 'NVDA', 'WYFI', 'XOM', 'BIDU', 'BE', 'HAL', 'NBIS', 'PAAS', 'LAR', 'MELI', 'LAC', 'NIO'] if t in portfolio_raw),
'United Kingdom': sum(portfolio_raw[t]['weight'] for t in ['SGLN', 'BARC', 'RR', 'ACG'] if t in portfolio_raw),
'Australia': sum(portfolio_raw[t]['weight'] for t in ['PLS', 'DRO', 'LTR', 'LKY'] if t in portfolio_raw),
'China / HK': sum(portfolio_raw[t]['weight'] for t in ['BABA', 'XIAOMI'] if t in portfolio_raw),
'Canada': sum(portfolio_raw[t]['weight'] for t in ['ARG', 'PMET'] if t in portfolio_raw),
'Cash': cash_pct
}
fig = go.Figure(data=[go.Pie(
labels=list(geo_allocation.keys()),
values=list(geo_allocation.values()),
hole=0.3,
marker=dict(colors=['#1f77b4', '#ff7f0e', '#2ca02c', '#d62728', '#9467bd', '#808080'])
)])
fig.update_layout(title='Geographic Allocation', width=700, height=500)
fig.show()
10. Summary & Recommendations¶
In [28]:
# Generate Summary
def safe_float(val):
if hasattr(val, 'item'):
return val.item()
if hasattr(val, 'iloc'):
return float(val.iloc[0])
return float(val)
_cagr = safe_float(qs.stats.cagr(portfolio_returns)) * 100
_vol = safe_float(qs.stats.volatility(portfolio_returns)) * 100
_sharpe = safe_float(qs.stats.sharpe(portfolio_returns))
_maxdd = safe_float(qs.stats.max_drawdown(portfolio_returns)) * 100
print("=" * 80)
print("PORTFOLIO ANALYSIS SUMMARY")
print("=" * 80)
print()
print("📊 PORTFOLIO OVERVIEW")
print(f" Holdings: {len(portfolio_raw)} positions")
print(f" Cash: {cash_pct}%")
print(f" Estimated Value: ~£140,000")
print()
print("📈 PERFORMANCE METRICS")
print(f" CAGR: {_cagr:.2f}%")
print(f" Volatility: {_vol:.2f}%")
print(f" Sharpe Ratio: {_sharpe:.3f}")
print(f" Max Drawdown: {_maxdd:.2f}%")
print()
print("💪 TOP 3 STRENGTHS")
print(" 1. Strong thematic diversification across AI, nuclear, lithium, gold, and China")
print(" 2. Significant unrealized gains in key positions (LAR +270%, PMET +186%, RR +149%, LTR +114%, NVDA +93%)")
print(f" 3. Cash buffer of {cash_pct}% provides dry powder for opportunities and reduces volatility")
print()
print("⚠️ TOP 3 AREAS FOR IMPROVEMENT")
print(" 1. High concentration in speculative/small-cap names increases tail risk")
print(" 2. Limited fixed income/bond allocation - consider adding for portfolio stability")
print(" 3. Several positions show negative unrealized P&L (AMZN, MELI, NNND, 1810, ACG, LKY)")
print()
print("🎯 ACTIONABLE RECOMMENDATIONS")
print(" 1. Consider trimming winners with >100% unrealized gains (LAR, PMET, RR, LTR) to lock in profits")
print(" 2. Review loss-making positions (LKY -75%, ACG -12%, NNND -12%) for thesis validity")
print(" 3. Add bond/treasury exposure (TLT, IEF) to improve risk-adjusted returns")
print(" 4. Monitor NVDA and GOOGL concentration - together they represent >11% of equity")
print(" 5. Consider deploying some cash into beaten-down quality names on further weakness")
print()
print("🌍 THEMATIC EXPOSURE SUMMARY")
for theme, tickers in themes.items():
if theme == 'Cash':
continue
w = portfolio_df[portfolio_df['Ticker'].isin(tickers)]['Weight_%'].sum()
if w > 0:
print(f" {theme:35s}: {w:.1f}%")
print(f" {'Cash':35s}: {cash_pct:.1f}%")
print()
print("=" * 80)
print("Analysis complete. See HTML export for full interactive report.")
print("=" * 80)
================================================================================
PORTFOLIO ANALYSIS SUMMARY
================================================================================

📊 PORTFOLIO OVERVIEW
   Holdings: 29 positions
   Cash: 15.4%
   Estimated Value: ~£140,000

📈 PERFORMANCE METRICS
   CAGR: 44.66%
   Volatility: 17.92%
   Sharpe Ratio: 2.151
   Max Drawdown: -17.92%

💪 TOP 3 STRENGTHS
   1. Strong thematic diversification across AI, nuclear, lithium, gold, and China
   2. Significant unrealized gains in key positions (LAR +270%, PMET +186%, RR +149%, LTR +114%, NVDA +93%)
   3. Cash buffer of 15.4% provides dry powder for opportunities and reduces volatility

⚠️ TOP 3 AREAS FOR IMPROVEMENT
   1. High concentration in speculative/small-cap names increases tail risk
   2. Limited fixed income/bond allocation - consider adding for portfolio stability
   3. Several positions show negative unrealized P&L (AMZN, MELI, NNND, 1810, ACG, LKY)

🎯 ACTIONABLE RECOMMENDATIONS
   1. Consider trimming winners with >100% unrealized gains (LAR, PMET, RR, LTR) to lock in profits
   2. Review loss-making positions (LKY -75%, ACG -12%, NNND -12%) for thesis validity
   3. Add bond/treasury exposure (TLT, IEF) to improve risk-adjusted returns
   4. Monitor NVDA and GOOGL concentration - together they represent >11% of equity
   5. Consider deploying some cash into beaten-down quality names on further weakness

🌍 THEMATIC EXPOSURE SUMMARY
   AI / Tech                          : 24.3%
   Gold / Silver                      : 15.0%
   Nuclear Energy                     : 10.2%
   Lithium / Critical Minerals        : 12.6%
   China Growth                       : 5.9%
   UK Equities                        : 9.2%
   Energy / Oil                       : 4.3%
   LatAm / EM                         : 1.5%
   Defense Tech                       : 1.5%
   Cash                               : 15.4%

================================================================================
Analysis complete. See HTML export for full interactive report.
================================================================================
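Recommendation 3 (adding bond exposure) can be sanity-checked numerically: blending in a lower-volatility, low-correlation return stream can raise the Sharpe ratio even though it lowers the mean return. A hedged sketch with synthetic return series (the vol/return figures and the TLT/IEF-like "bonds" stream are illustrative assumptions, not fitted to the actual portfolio):

```python
import numpy as np
import pandas as pd

def sharpe(r: pd.Series, periods: int = 252) -> float:
    """Annualized Sharpe ratio (zero risk-free rate assumed)."""
    return float(r.mean() / r.std() * np.sqrt(periods))

rng = np.random.default_rng(42)
n = 2520  # roughly 10 years of trading days
equity = pd.Series(rng.normal(0.0008, 0.012, n))  # higher return, higher vol
bonds = pd.Series(rng.normal(0.0002, 0.004, n))   # lower return, lower vol, uncorrelated

# Sharpe generally improves as the low-vol sleeve is blended in,
# even though the blend's mean return falls
for bond_w in [0.0, 0.1, 0.2, 0.3]:
    blend = (1 - bond_w) * equity + bond_w * bonds
    print(f"{bond_w:.0%} bonds: Sharpe {sharpe(blend):.2f}")
```

The same experiment could be run on the real data by blending `portfolio_returns` with a downloaded bond-ETF return series before recomputing the QuantStats metrics.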